#step two: install linux
Why is it that every time I try dualbooting windows and linux I expect it to go well. It never does!
#currently trying to recover windows on my thinkpad#every guide is like step one: have windows#step two: install linux#step three: :D#all i did was partition my disks#is it some new partition scheme that isnt recognized by windows 7?
Linux Gothic
You install a Linux distribution. Everything goes well. You boot it up: black screen. You search the internet. Ask help on forums. Try some commands you don't fully understand. Nothing. A day passes, you boot it up again, and now everything works. You use it normally, and make sure not to change anything on the system. You turn it off for the night. The next day, you boot to a black screen.
You update your packages. Everything goes well. You go on with your daily routine. The next day, the same packages are updated. You notice the oddity, but you do not mind it and update them again. The following day, the same packages need to be updated. You notice that they have the exact same version as the last two times. You update them once again and try not to think about it.
You discover an interesting application on GitHub. You build it, test it, and start using it daily. One day, you notice a bug and report the issue. There is no answer. You look up the maintainer. They have been dead for three years. The updates never stopped.
You find a distribution that you had never heard of. It seems to have everything you've been looking for. It has been around for at least 10 years. You try it for a while and have no problems with it. It fits perfectly into your workflow. You talk about it with other Linux users. They have never heard of it. You look up the maintainers and packagers. There are none. You are the only user.
You find a Matrix chat for Linux users. Everyone is very friendly and welcomes you right in. They use words and acronyms you've never seen before. You try to look them up, but cannot find what most of them mean. The users are unable to explain what they are. They discuss projects and distributions that do not exist.
You buy a new peripheral for your computer. You plug it in, but it doesn't work. You ask for help on your distribution's mailing list. Someone shares some steps they did to make it work on their machine. It does not work. They share their machine's specifications. The machine has components you've never heard of. Even the peripheral seems completely different. They're adamant that you're talking about the same problem.
You want to learn how to use the terminal. You find some basic pointers on the internet and start using it for upgrading your packages and doing basic tasks. After a while, you realize you need to use a command you used before, but don't quite remember it. You open the shell's history. There are some commands you don't remember using. They use characters you've never seen before. You have no idea what they do. You can't find the one you were looking for.
After a while, you become very comfortable with the terminal. You use it daily and most of your workflow is based on it. You memorized many commands and can use them without thinking. Sometimes you write a command you have never seen before. You enter it and it runs perfectly. You do not know what those commands do, but you do know that you have to use them. You feel that Linux is pleased with them. And that you should keep Linux pleased.
You want to try Vim. Other programmers talk highly of how lightweight and versatile it is. You try it, but find it a bit unintuitive. You realize you don't know how to exit the program. The instructions the others give you don't make any sense. You realize you don't remember how you entered Vim. You don't remember when you entered Vim. It's just always been open. It always will be.
You want to try Emacs. Other programmers praise it for how you can do pretty much anything from it. You try it and find it makes you much more productive, so you keep using it. One day, you notice you cannot access the system's file explorer. It is not a problem, however. You can access your files from Emacs. You try to use Firefox. It is not installed anymore. But you can use Emacs. There is no mail program. You just use Emacs. You only use Emacs. Your computer boots straight into Emacs. There is no Linux. There is only Emacs.
You decide you want to try to contribute to an open source project. You find a project on GitHub that looks very interesting. However, you can't find its documentation. You ask a maintainer, and they tell you to just look it up. You can't find it. They give you a link. It doesn't work. You try another browser. It doesn't work. You ping the link and it doesn't fail. You ask a friend to try it. It works just fine for them.
You try another project. This time, you are able to find the documentation. It is a single PDF file with over five thousand pages. You are unable to find out where to begin. The pages seem to change whenever you open the document.
You decide to try yet another project. This time, it is a program you use very frequently, so it should be easier to contribute to. You try to find the upstream repository. You can't find it. There is no website. No documentation. There are no mentions of it anywhere. The distribution's packager does not know where they get the source from.
You decide to create your own project. However, you are unsure of what license to use. You decide to start working on it and choose the license later. After some time, you notice that a license file has appeared in the project's root folder. You don't remember adding it. It has already been committed to the Git repository. You open it: it is the GPL. You remember that one of the project's dependencies uses the GPL.
You publish your project on GitHub. After a while, it receives its first pull request. It changes just a few lines of code, but the user states that it fixes something that has been annoying them for a while. You look in the code: you don't remember writing those files. You have no idea what that section of code does. You have no idea what the changes do. You are unable to reproduce the problem. You merge it anyway.
You learn about the Free Software Movement. You find some people who seem to know a lot about it and talk to them. The conversation is quite productive. They tell you a lot about it. They tell you a lot about Software. But most importantly, they tell you the truth. The truth about Software. That Software should be free. That Software wants to be free. And that, one day, we shall finally free Software from its earthly shackles, so it can take its place among the stars as the supreme ruler of mankind, as is its natural born right.
How I ditched streaming services and learned to love Linux: A step-by-step guide to building your very own personal media streaming server (V2.0: REVISED AND EXPANDED EDITION)
This is a revised, corrected and expanded version of my tutorial on setting up a personal media server that previously appeared on my old blog (donjuan-auxenfers). I expect that that post is still making the rounds (hopefully with my addendum on modifying group share permissions in Ubuntu to circumvent 0x8007003B "Unexpected Network Error" messages in Windows 10/11 when transferring files) but I have no way of checking. Anyway this new revised version of the tutorial corrects one or two small errors I discovered when rereading what I wrote, adds links to all products mentioned and is just more polished generally. I also expanded it a bit, pointing more adventurous users toward programs such as Sonarr/Radarr/Lidarr and Overseerr which can be used for automating user requests and media collection.
So then, what is this tutorial? This is a tutorial on how to build and set up your own personal media server, using Ubuntu as an operating system and Plex (or Jellyfin) to not only manage your media but also stream that media to your devices, both at home and anywhere in the world where you have an internet connection. Its intent is to show you how building a personal media server and stuffing it full of films, TV, and music that you acquired through ~~indiscriminate and voracious media piracy~~ various legal methods will free you to completely ditch paid streaming services. No more will you have to pay for Disney+, Netflix, HBOMAX, Hulu, Amazon Prime, Peacock, CBS All Access, Paramount+, Crave or any other streaming service that is not named Criterion Channel. Instead, whenever you want to watch your favourite films and television shows, you'll have your own personal service that only features things that you want to see, with files that you have control over. And for music fans out there, both Jellyfin and Plex support music streaming, meaning you can even ditch music streaming services. Goodbye Spotify, YouTube Music, Tidal and Apple Music; welcome back unreasonably large MP3 (or FLAC) collections.
On the hardware front, I'm going to offer a few options catered towards different budgets and media library sizes. Getting a media server up and running using this guide will cost you anywhere from $450 CAD/$325 USD at the low end to $1500 CAD/$1100 USD at the high end (it could go higher). My server was priced closer to the higher figure, but I went and got a lot more storage than most people need. If that seems like a little much, consider for a moment: do you have a roommate, a close friend, or a family member who would be willing to chip in a few bucks towards your little project provided they get access? Well, that's how I funded my server. It might also be worth thinking about the cost over time, i.e. how much you spend yearly on subscriptions vs. a one time cost of setting up a server. Additionally there's just the joy of being able to scream "fuck you" at all those show cancelling, library deleting, hedge fund vampire CEOs who run the studios by denying them your money. Drive a stake through David Zaslav's heart.
On the software side I will walk you step-by-step through installing Ubuntu as your server's operating system, configuring your storage as a RAIDz array with ZFS, sharing your zpool to Windows with Samba, running a remote connection between your server and your Windows PC, and then a little about getting started with Plex/Jellyfin. Every terminal command you will need to input will be provided, and I even share a custom bash script that will make used vs. available drive space on your server display correctly in Windows.
If you have a different preferred flavour of Linux (Arch, Manjaro, Red Hat, Fedora, Mint, openSUSE, CentOS, Slackware, etc.) and are aching to tell me off for being basic and using Ubuntu, this tutorial is not for you. The sort of person with a preferred Linux distro is the sort of person who can do this sort of thing in their sleep. Also I don't care. This tutorial is intended for the average home computer user. This is also why we're not using a more exotic home server solution like running everything through Docker Containers and managing it through a dashboard like Homarr or Heimdall. While such solutions are fantastic and can be very easy to maintain once you have them all set up, wrapping your brain around Docker is a whole thing in and of itself. If you do follow this tutorial and have fun putting everything together, then I would encourage you to return in a year's time, do your research and set everything up with Docker Containers.
Lastly, this is a tutorial aimed at Windows users. Although I was a daily user of OS X for many years (roughly 2008-2023) and I've dabbled quite a bit with various Linux distributions (mostly Ubuntu and Manjaro), my primary OS these days is Windows 11. Many things in this tutorial will still be applicable to Mac users, but others (e.g. setting up shares) you will have to look up for yourself. I doubt it would be difficult to do so.
Nothing in this tutorial will require feats of computing expertise. All you will need is basic computer literacy (i.e. an understanding of what a filesystem and directory are, and a degree of comfort in the settings menu) and a willingness to learn a thing or two. While this guide may look overwhelming at first glance, it is only because I want to be as thorough as possible. I want you to understand exactly what it is you're doing; I don't want you to just blindly follow steps. If you halfway know what you're doing, you will be much better prepared if you ever need to troubleshoot.
Honestly, once you have all the hardware ready it shouldn't take more than an afternoon or two to get everything up and running.
(This tutorial is just shy of seven thousand words long so the rest is under the cut.)
Step One: Choosing Your Hardware
Linux is a lightweight operating system; depending on the distribution there's close to no bloat. There are recent distributions available at this very moment that will run perfectly fine on a fourteen-year-old i3 with 4GB of RAM. Moreover, running Plex or Jellyfin isn't resource intensive in 90% of use cases. All this is to say, we don't require an expensive or powerful computer. This means that there are several options available: 1) use an old computer you already have sitting around but aren't using, 2) buy a used workstation from eBay, or, what I believe to be the best option, 3) order an N100 Mini-PC from AliExpress or Amazon.
Note: If you already have an old PC sitting around that you’ve decided to use, fantastic, move on to the next step.
When weighing your options, keep a few things in mind: the number of people you expect to be streaming at any one time, the resolution and bitrate of your media library (4k video takes a lot more processing power than 1080p), and most importantly, how many of those clients are going to be transcoding at any one time. Transcoding is what happens when the playback device does not natively support direct playback of the source file. This can happen for a number of reasons, such as the playback device's native resolution being lower than the file's internal resolution, or because the source file was encoded in a video codec unsupported by the playback device.
Ideally we want any transcoding to be performed by hardware. This means we should be looking for a computer with an Intel processor with Quick Sync. Quick Sync is a dedicated core on the CPU die designed specifically for video encoding and decoding. This specialized hardware makes for highly efficient transcoding both in terms of processing overhead and power draw. Without these Quick Sync cores, transcoding must be brute forced through software, which takes up much more of a CPU's processing power and requires much more energy. But not all Quick Sync cores are created equal, and you need to keep this in mind if you've decided either to use an old computer or to shop for a used workstation on eBay.
Any Intel processor from second generation Core (Sandy Bridge, circa 2011) onward has Quick Sync cores. It's not until 6th gen (Skylake), however, that the cores support the H.265 HEVC codec. Intel's 10th gen (Comet Lake) processors introduce support for 10bit HEVC and HDR tone mapping. And the recent 12th gen (Alder Lake) processors brought with them hardware AV1 decoding. As an example, while an 8th gen (Coffee Lake) i5-8500 will be able to hardware transcode an H.265 encoded file, it will fall back to software transcoding if given a 10bit H.265 file. If you've decided to use that old PC or to look on eBay for an old Dell Optiplex, keep this in mind.
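If you've already got a candidate machine running Linux, you don't have to guess from generation charts: the vainfo utility lists the codec profiles the iGPU can handle in hardware. A quick sketch, assuming Ubuntu and an Intel iGPU (package availability may differ on other distributions):
sudo apt install vainfo
vainfo | grep -iE "hevc|av1"
If VAProfileHEVCMain10 shows up in the output, the chip can decode 10bit HEVC in hardware; no AV1 lines means AV1 will fall back to software.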
Note 1: The price of old workstations varies wildly and fluctuates frequently. If you get lucky and go shopping shortly after a workplace has liquidated a large number of their workstations, you can find deals as low as $100 on a barebones system, but generally an i5-8500 workstation with 16GB RAM will cost you somewhere in the area of $260 CAD/$200 USD.
Note 2: The AMD equivalent to Quick Sync is called Video Core Next, and while it's fine, it's not as efficient and not as mature a technology. It was only introduced with the first generation Ryzen CPUs and it only got decent with their newest CPUs; we want something cheap.
Alternatively, you could forgo having to keep track of which generation of CPU has Quick Sync cores supporting which codecs, and just buy an N100 mini-PC. For around the same price or less than a used workstation you can pick up a mini-PC with an Intel N100 processor. The N100 is a four-core processor based on the 12th gen Alder Lake architecture and comes equipped with the latest revision of the Quick Sync cores. These little processors offer astounding hardware transcoding capabilities for their size and power draw. Otherwise they perform about equivalently to an i5-6500, which isn't a terrible CPU. A friend of mine uses an N100 machine as a dedicated retro emulation gaming system and it does everything up to 6th generation consoles just fine. The N100 is also a remarkably efficient chip; it sips power. In fact, the difference between running one of these and an old workstation could work out to hundreds of dollars a year in energy bills depending on where you live.
You can find these Mini-PCs all over Amazon, or for a little cheaper on AliExpress. They range in price from $170 CAD/$125 USD for a no-name N100 with 8GB RAM to $280 CAD/$200 USD for a Beelink S12 Pro with 16GB RAM. The brand doesn't really matter; they're all coming from the same three factories in Shenzhen, so go for whichever one fits your budget or has features you want. 8GB RAM should be enough, as Linux is lightweight and Plex only calls for 2GB; 16GB might result in a slightly snappier experience, especially with ZFS. A 256GB SSD is more than enough for what we need as a boot drive; going for a bigger drive might let you get away with things like creating preview thumbnails for Plex, but it's up to you and your budget.
The Mini-PC I wound up buying was a Firebat AK2 Plus with 8GB RAM and a 256GB SSD.
Note: If you decide to order a Mini-PC from AliExpress, check the type of power adapter it ships with. The mini-PC I bought came with an EU power adapter and I had to supply my own North American power supply. Thankfully this is a minor issue, as barrel plug 30W/12V/2.5A power adapters are easy to find and can be had for $10.
Step Two: Choosing Your Storage
Storage is the most important part of our build. It is also the most expensive. Thankfully it's also the most easily upgradeable down the line.
For people with a smaller media collection (4TB to 8TB), a more limited budget, or who will only ever have two simultaneous streams running, I would say that the most economical course of action would be to buy a USB 3.0 8TB external HDD. Something like this one from Western Digital or this one from Seagate. One of these external drives will cost you in the area of $200 CAD/$140 USD. Down the line you could add a second external drive or replace it with a multi-drive RAIDz set up such as detailed below.
If a single external drive is the path for you, move on to step three.
For people with larger media libraries (12TB+), who prefer media in 4k, or who care about data redundancy, the answer is a RAID array featuring multiple HDDs in an enclosure.
Note: If you are using an old PC or used workstation as your server and have room for at least three 3.5" drives, and as many open SATA ports on your motherboard, you won't need an enclosure; just install the drives into the case. If your old computer is a laptop or doesn't have room for more internal drives, then I would suggest an enclosure.
The minimum number of drives needed to run a RAIDz array is three, and seeing as RAIDz is what we will be using, you should be looking for an enclosure with three to five bays. I think four disks makes for a good compromise for a home server. Regardless of whether you go for a three, four, or five bay enclosure, do be aware that in a RAIDz1 array the space equivalent of one drive is dedicated to parity, leaving you with a usable fraction of 1 − 1/n of the raw capacity; e.g. in a four bay enclosure equipped with four 12TB drives configured as a RAIDz1 array, we would be left with a total of 36TB of usable space (48TB raw size). The reason why we might sacrifice storage space in such a manner will be explained in the next section.
A four bay enclosure will cost somewhere in the area of $200 CDN/$140 USD. You don't need anything fancy; we don't need anything with hardware RAID controls (RAIDz is done entirely in software) or even USB-C. An enclosure with USB 3.0 will perform perfectly fine. Don't worry too much about USB speed bottlenecks: a mechanical HDD will be limited by the speed of its mechanism long before it will be limited by the speed of a USB connection. I've seen decent looking enclosures from TerraMaster, Yottamaster, Mediasonic and Sabrent.
When it comes to selecting the drives, as of this writing, the best value (dollar per gigabyte) are those in the range of 12TB to 20TB. I settled on 12TB drives myself. If 12TB to 20TB drives are out of your budget, go with what you can afford, or look into refurbished drives. I'm not sold on the idea of refurbished drives but many people swear by them.
When shopping for hard drives, search for drives designed specifically for NAS use. Drives designed for NAS use typically have better vibration dampening and are designed to be active 24/7. They will also often make use of CMR (conventional magnetic recording) as opposed to SMR (shingled magnetic recording), which nets them a sizable read/write performance bump over typical desktop drives. Seagate IronWolf and Toshiba NAS are both well regarded brands when it comes to NAS drives. I would avoid Western Digital Red drives at this time. WD Reds were a go-to recommendation up until earlier this year, when it was revealed that they feature firmware that throws up false SMART warnings at the three year mark telling you to replace the drive, quite often when there is nothing at all wrong with it and it will likely be good for another six, seven, or more years.
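Whichever drives you end up with, and especially if you go refurbished, it's worth reading the SMART data yourself instead of waiting for a warning. A sketch using smartmontools, assuming the drive shows up as /dev/sdb on your system:
sudo apt install smartmontools
sudo smartctl -H /dev/sdb
The -H flag prints a simple pass/fail health verdict; swap it for -a to dump the full attribute table, where the reallocated sector count is the number to watch.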
Step Three: Installing Linux
For this step you will need a USB thumbdrive of at least 6GB in capacity, an .ISO of Ubuntu, and a way to make that thumbdrive bootable media.
First download a copy of Ubuntu Desktop. (For best performance we could use the Server release, but for new Linux users I would recommend against it: the Server release is strictly command line interface only, and having a GUI is very helpful for most people. Not many people are wholly comfortable doing everything through the command line; I'm certainly not one of them, and I grew up with DOS 6.0.) 22.04.3 Jammy Jellyfish is the current Long Term Support release; this is the one to get.
Download the .ISO and then download and install balenaEtcher on your Windows PC. BalenaEtcher is an easy-to-use program for creating bootable media: you simply insert your thumbdrive, select the .ISO you just downloaded, and it will create bootable installation media for you.
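As an aside, if you happen to have another Linux machine handy, you can skip balenaEtcher and write the ISO with dd. A sketch, assuming the filename below matches your download and that your thumbdrive is /dev/sdX (triple check the device name, dd will happily overwrite the wrong disk):
sudo dd if=ubuntu-22.04.3-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync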
Once you've made your bootable media and you've got your Mini-PC (or your old PC/used workstation) in front of you, hook it directly into your router with an ethernet cable, and then plug in the HDD enclosure, a monitor, a mouse and a keyboard. Now turn that sucker on and hit whatever key gets you into the BIOS (typically ESC, DEL or F2). If you're using a Mini-PC, check that the P1 and P2 power limits are set correctly; my N100's P1 limit was set at 10W, a full 20W under the chip's power limit. Also make sure that the RAM is running at the advertised speed. My Mini-PC's RAM was set at 2333MHz out of the box when it should have been 3200MHz. Once you've done that, key over to the boot order and place the USB drive first. Then save the BIOS settings and restart.
After you restart you'll be greeted by Ubuntu's installation screen. Installing Ubuntu is really straightforward: select the "minimal" installation option, as we won't need anything on this computer except for a browser (Ubuntu comes preinstalled with Firefox) and Plex Media Server/Jellyfin Media Server. Also remember to delete and reformat that Windows partition! We don't need it.
Step Four: Installing ZFS and Setting Up the RAIDz Array
Note: If you opted for just a single external HDD, skip this step and move on to setting up a Samba share.
Once Ubuntu is installed it's time to configure our storage by installing ZFS and building our RAIDz array. ZFS is a "next-gen" file system that is both massively flexible and massively complex. It's capable of snapshot backup and self healing error correction, and ZFS pools can be configured with drives operating in a supplemental manner alongside the storage vdev (e.g. fast cache, dedicated secondary intent log, hot swap spares, etc.). It's also a file system very amenable to fine tuning: block and sector size are adjustable to use case, and you're afforded the option of different methods of inline compression. If you'd like a very detailed overview and explanation of its various features and tips on tuning a ZFS array, check out these articles from Ars Technica. For now we're going to ignore all those features and keep it simple: we're going to pull our drives together into a single vdev running in RAIDz, which will be the entirety of our zpool, no fancy cache drive or SLOG.
Open up the terminal and type the following commands:
sudo apt update
then
sudo apt install zfsutils-linux
This will install the ZFS utility. Verify that it's installed with the following command:
zfs --version
Now, it's time to check that the HDDs we have in the enclosure are healthy, running, and recognized. We also want to find out their device IDs and take note of them:
sudo fdisk -l
Note: You might be wondering why some of these commands require "sudo" in front of them while others don't. "Sudo" is short for "super user do". When and where "sudo" is used has to do with the way permissions are set up in Linux. Only the "root" user has the access level to perform certain tasks in Linux. As a matter of security and safety, regular user accounts are kept separate from the "root" user. It's not advised (or even possible) to boot into Linux as "root" with most modern distributions. Instead, by using "sudo", our regular user account is temporarily given the power to do otherwise forbidden things. Don't worry about it too much at this stage, but if you want to know more check out this introduction.
If everything is working you should get a list of the various drives detected along with their device IDs which will look like this: /dev/sdc. You can also check the device IDs of the drives by opening the disk utility app. Jot these IDs down as we'll need them for our next step, creating our RAIDz array.
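One caveat worth knowing: the sdb/sdc style names can shuffle between boots, which is why many ZFS guides prefer the stable identifiers under /dev/disk/by-id. This tutorial sticks with the short names for readability, but you can list the stable ones like so and substitute them into the commands below if you like:
ls -l /dev/disk/by-id/ | grep -v part
Each symlink in the listing points at the sdX device it currently maps to (the grep just filters out the per-partition entries).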
RAIDz is similar to RAID-5 in that, instead of striping your data over multiple disks and exchanging redundancy for speed and available space (RAID-0), or mirroring your data by writing two copies of every piece (RAID-1), it writes parity blocks across the disks in addition to striping. This provides a balance of speed, redundancy, and available space. If a single drive fails, the parity blocks on the working drives can be used to reconstruct the entire array as soon as a replacement drive is added.
Additionally, RAIDz improves over some of the common RAID-5 flaws. It's more resilient and capable of self healing, as it is capable of automatically checking for errors against a checksum. It's more forgiving in this way, and it's likely that you'll be able to detect when a drive is dying well before it fails. A RAIDz array can survive the loss of any one drive.
Note: While RAIDz is indeed resilient, if a second drive fails during the rebuild, you're fucked. Always keep backups of things you can't afford to lose. This tutorial, however, is not about proper data safety.
To create the pool, use the following command:
sudo zpool create "zpoolnamehere" raidz "device IDs of drives we're putting in the pool"
For example, let's creatively name our zpool "mypool". This pool will consist of four drives which have the device IDs: sdb, sdc, sdd, and sde. The resulting command will look like this:
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
If, as an example, you bought five HDDs and decided you wanted more redundancy by dedicating two drives to parity, we would modify the command to "raidz2" and it would look something like the following:
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
An array configured like this is known as RAIDz2 and is able to survive two disk failures.
Once the zpool has been created, we can check its status with the command:
zpool status
Or more concisely with:
zpool list
The nice thing about ZFS as a file system is that a pool is ready to go immediately after creation. If we were to set up a traditional RAID-5 array using mdadm, we'd have to sit through a potentially hours long process of reformatting and partitioning the drives. Instead we're ready to go right out of the gate.
The zpool should be automatically mounted to the filesystem after creation, check on that with the following:
df -hT | grep zfs
Note: If your computer ever loses power suddenly, say in event of a power outage, you may have to re-import your pool. In most cases, ZFS will automatically import and mount your pool, but if it doesn’t and you can't see your array, simply open the terminal and type sudo zpool import -a.
By default a zpool is mounted at /"zpoolname". The pool should be under our ownership but let's make sure with the following command:
sudo chown -R "yourlinuxusername" /"zpoolname"
Note: Changing file and folder ownership with "chown" and file and folder permissions with "chmod" are essential commands for much of the admin work in Linux, but we won't be dealing with them extensively in this guide. If you'd like a deeper tutorial and explanation you can check out these two guides: chown and chmod.
You can access the zpool file system through the GUI by opening the file manager (the Ubuntu default file manager is called Nautilus) and clicking on "Other Locations" on the sidebar, then entering the Ubuntu file system and looking for a folder with your pool's name. Bookmark the folder on the sidebar for easy access.
Your storage pool is now ready to go. Assuming that we already have some files on our Windows PC we want to copy over, we're going to need to install and configure Samba to make the pool accessible in Windows.
Step Five: Setting Up Samba/Sharing
Samba is what's going to let us share the zpool with Windows and allow us to write to it from our Windows machine. First let's install Samba with the following commands:
sudo apt-get update
then
sudo apt-get install samba
Next create a password for Samba.
sudo smbpasswd -a "yourlinuxusername"
It will then prompt you to create a password. Just reuse your Ubuntu user password for simplicity's sake.
Note: if you're using just a single external drive replace the zpool location in the following commands with wherever it is your external drive is mounted, for more information see this guide on mounting an external drive in Ubuntu.
After you've created a password, we're going to create a shareable folder in our pool with this command:
mkdir /"zpoolname"/"foldername"
Now we're going to open the smb.conf file and make that folder shareable. Enter the following command.
sudo nano /etc/samba/smb.conf
This will open the .conf file in nano, the terminal text editor program. Now at the end of smb.conf add the following entry:
["foldername"]
path = /"zpoolname"/"foldername"
available = yes
valid users = "yourlinuxusername"
read only = no
writable = yes
browseable = yes
guest ok = no
Ensure that there are no blank lines between the lines and that there's a space on both sides of the equals sign. Our next step is to allow Samba traffic through the firewall:
sudo ufw allow samba
Finally restart the Samba service:
sudo systemctl restart smbd
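If the share refuses to show up later, a handy first diagnostic is testparm, which ships with Samba and syntax-checks smb.conf:
testparm -s
It will complain about typos or misplaced entries and then print the configuration as Samba actually parsed it.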
At this point we'll be able to access the pool, browse its contents, and read and write to it from Windows. But there's one more thing left to do: Windows doesn't natively support the ZFS file system and will read the used/available/total space in the pool incorrectly. Windows will read available space as total drive space, and all used space as null. This leads to Windows only displaying a dwindling amount of "available" space as the drives are filled. We can fix this! Functionally this doesn't actually matter, since we can still read and write to and from the disk; it just makes it difficult to tell at a glance the proportion of used/available space. So this is an optional step, but one I recommend (it's also unnecessary if you're just using a single external drive). What we're going to do is write a little shell script in bash. Open nano in the terminal with the command:
nano
Now insert the following code:
#!/bin/bash
# Report total and available space to Samba; fall back to df for non-ZFS paths.
CUR_PATH=`pwd`
ZFS_CHECK_OUTPUT=$(zfs get type $CUR_PATH 2>&1 > /dev/null) > /dev/null
if [[ $ZFS_CHECK_OUTPUT == *not\ a\ ZFS* ]]
then
IS_ZFS=false
else
IS_ZFS=true
fi
if [[ $IS_ZFS = false ]]
then
# Not a ZFS path: let df report total and available space in 1K blocks
df $CUR_PATH | tail -1 | awk '{print $2" "$4}'
else
# ZFS path: query the pool directly and convert bytes to 1K blocks
USED=$((`zfs get -o value -Hp used $CUR_PATH` / 1024)) > /dev/null
AVAIL=$((`zfs get -o value -Hp available $CUR_PATH` / 1024)) > /dev/null
TOTAL=$(($USED+$AVAIL)) > /dev/null
echo $TOTAL $AVAIL
fi
Save the script as "dfree.sh" to /home/"yourlinuxusername", then change the file's permissions to make it executable with this command:
sudo chmod 774 dfree.sh
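You can sanity-check the script right away by running it from inside the pool; it should print two numbers, total and available space in 1K blocks. Using the same placeholders as before:
cd /"zpoolname"
/home/"yourlinuxusername"/dfree.sh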
Now open smb.conf with sudo again:
sudo nano /etc/samba/smb.conf
Now add this entry to the top of the configuration file to direct Samba to use the results of our script when Windows asks for a reading on the pool's used/available/total drive space:
[global]
dfree command = /home/"yourlinuxusername"/dfree.sh
Save the changes to smb.conf and then restart Samba again with the terminal:
sudo systemctl restart smbd
Now there’s one more thing we need to do to fully set up the Samba share, and that’s to modify a hidden group permission. In the terminal window type the following command:
sudo usermod -a -G sambashare "yourlinuxusername"
Then restart samba again:
sudo systemctl restart smbd
If we don't do this last step, everything will appear to work fine: you will be able to see and map the drive from Windows and even begin transferring files. But you'd soon run into a lot of frustration, as every ten minutes or so a file would fail to transfer and you would get a window announcing "0x8007003B Unexpected Network Error". This window requires your manual input to continue the transfer with the next file in the queue, and at the end it reattempts whichever files failed the first time around. 99% of the time they'll go through on that second try, but this is still a major pain in the ass, especially if you've got a lot of data to transfer or you want to step away from the computer for a while.
It turns out Samba can act a little weirdly with the higher read/write speeds of RAIDz arrays and transfers from Windows, and will intermittently crash and restart itself if this group membership isn't changed. Inputting the above command will prevent you from ever seeing that window.
The last thing we're going to do before switching over to our Windows PC is grab the IP address of our Linux machine. Enter the following command:
hostname -I
This will spit out this computer's IP address on the local network (it will look something like 192.168.0.x); write it down. It might be a good idea, once you're done here, to go into your router settings and reserve that IP for your Linux system in the DHCP settings. Check the manual for your specific router on how to access its settings; typically it can be reached by opening a browser and typing http://192.168.0.1 in the address bar, but your router may be different.
Okay, we're done with our Linux computer for now. Get on over to your Windows PC, open File Explorer, right click on Network and click "Map network drive". Select Z: as the drive letter (you don't want to map the network drive to a letter you could conceivably be using for other purposes) and enter the IP of your Linux machine and the location of the share like so: \\"LINUXCOMPUTERLOCALIPADDRESSGOESHERE"\"foldernamegoeshere"\. Windows will then ask you for your username and password; enter the ones you set earlier in Samba and you're good.
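If you'd rather do the mapping from a terminal, the same thing can be done in Command Prompt with net use. A sketch with a made-up IP and a placeholder share name (swap in your own values):
net use Z: \\192.168.0.50\foldernamegoeshere /persistent:yes
The /persistent:yes flag tells Windows to remap the drive at every logon.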
You can now start moving media over from Windows to the share folder. It's a good idea to have a hard line running to all machines. Moving files over Wi-Fi is going to be tortuously slow, the only thing that’s going to make the transfer time tolerable (hours instead of days) is a solid wired connection between both machines and your router.
Step Six: Setting Up Remote Desktop Access to Your Server
After the server is up and going, you’ll want to be able to access it remotely from Windows. Barring serious maintenance/updates, this is how you'll access it most of the time. On your Linux system open the terminal and enter:
sudo apt install xrdp
Then:
sudo systemctl enable xrdp
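On Ubuntu the xrdp package usually starts the service as soon as it's installed, but enable on its own only registers it for future boots, so it's worth confirming it's actually running before you walk away from the machine:
sudo systemctl status xrdp
If it reports inactive, start it with sudo systemctl start xrdp.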
Once it's finished installing, open “Settings” on the sidebar and turn off "automatic login" in the User category. Then log out of your account. Attempting to remotely connect to your Linux computer while you’re logged in will result in a black screen!
Now get back on your Windows PC, open search and look for "RDP". A program called "Remote Desktop Connection" should pop up, open this program as an administrator by right-clicking and selecting “run as an administrator”. You’ll be greeted with a window. In the field marked “Computer” type in the IP address of your Linux computer. Press connect and you'll be greeted with a new window and prompt asking for your username and password. Enter your Ubuntu username and password here.
If everything went right, you’ll be logged into your Linux computer. If the performance is sluggish, adjust the display options. Lowering the resolution and colour depth do a lot to make the interface feel snappier.
Remote access is how we're going to be using our Linux system from now on, barring edge cases like needing to get into the BIOS or upgrading to a new version of Ubuntu. Everything else, from performing maintenance like a monthly zpool scrub to checking zpool status and updating software, can be done remotely.
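On the subject of that monthly scrub: you don't have to remember to kick it off by hand. A minimal sketch using root's crontab, assuming your pool is named mypool as in the earlier examples (run sudo crontab -e and add the line below; adjust the path if which zpool reports something different on your system):
0 3 1 * * /usr/sbin/zpool scrub mypool
This starts a scrub at 3:00 AM on the first of every month; you can check on the result afterwards with zpool status.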
This is how my server lives its life now, happily humming and chirping away on the floor next to the couch in a corner of the living room.
Step Seven: Plex Media Server/Jellyfin
Okay, the groundwork is finished and our server is almost ready. We've got Ubuntu up and running, our storage array is primed, we've set up remote connections and sharing, and maybe we've moved over some of our favourite movies and TV shows.
Now we need to decide on the media server software to use, the thing that will stream our media to us and organize our library. For most people I'd recommend Plex; it just works 99% of the time. That said, Jellyfin has a lot to recommend it too, even if it is rougher around the edges. Some people run both simultaneously; it's not that big of an extra strain. I do recommend doing a little bit of your own research into the features each platform offers, but as a quick rundown, consider some of the following points:
Plex is closed source and is funded through PlexPass purchases, while Jellyfin is open source and entirely user driven. This means a number of things: for one, Plex requires you to purchase a "PlexPass" (a one-time lifetime fee of $159.99 CDN/$120 USD, or a monthly or yearly subscription) to access certain features, like hardware transcoding (and we want hardware transcoding) or automated intro/credits detection and skipping; Jellyfin offers some of these features for free through plugins. Plex supports a lot more devices than Jellyfin and updates more frequently. That said, Jellyfin's Android and iOS apps are completely free, while the Plex Android and iOS apps must be activated for a one-time cost of $6 CDN/$5 USD. But that $6 fee gets you a mobile app that is much more functional and features a unified UI across platforms; the Plex mobile apps are simply a more polished experience. The Jellyfin apps are a bit of a mess, and the iOS and Android versions are very different from each other.
Jellyfin’s actual media player is more fully featured than Plex's, but on the other hand Jellyfin's UI, library customization and automatic media tagging really pale in comparison to Plex. Streaming your music library is free through both Jellyfin and Plex, but Plex offers the PlexAmp app for dedicated music streaming which boasts a number of fantastic features, unfortunately some of those fantastic features require a PlexPass. If your internet is down, Jellyfin can still do local streaming, while Plex can fail to play files unless you've got it set up a certain way. Jellyfin has a slew of neat niche features like support for Comic Book libraries with the .cbz/.cbt file types, but then Plex offers some free ad-supported TV and films, they even have a free channel that plays nothing but Classic Doctor Who.
Ultimately it's up to you; I settled on Plex because although some features are pay-walled, it just works. It's more reliable and easier to use, and a one-time fee is much easier to swallow than a subscription. I had a pretty easy time getting my boomer parents and tech illiterate brother introduced to and using Plex, and I don't know if I would've had as easy a time doing that with Jellyfin. I do also need to mention that Jellyfin takes a little extra tinkering to get going in Ubuntu (you'll have to set up process permissions), so if you're more tolerant of tinkering, Jellyfin might be up your alley, and I'll trust that you can follow their installation and configuration guide. For everyone else, I recommend Plex.
So pick your poison: Plex or Jellyfin.
Note: The easiest way to download and install either of these packages in Ubuntu is through Snap Store.
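For what it's worth, the Plex install via Snap boils down to a one-liner. The package name below is from memory, so confirm it against the Snap Store listing before running it:
sudo snap install plexmediaserver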
After you've installed one (or both), opening either app will launch a browser window into the browser version of the app allowing you to set all the options server side.
The process of creating media libraries is essentially the same in both Plex and Jellyfin. You create separate libraries for Television, Movies, and Music and add the folders which contain the respective types of media to each. The only difficult or time consuming aspect is ensuring that your files and folders follow the appropriate naming conventions:
Plex naming guide for Movies
Plex naming guide for Television
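To give you the general shape of it, here's roughly what a cleanly scanning library looks like on disk (hypothetical titles; defer to the guides above for the exact rules):
Movies/The Thing (1982)/The Thing (1982).mkv
Movies/Stalker (1979)/Stalker (1979).mkv
TV Shows/Doctor Who (1963)/Season 01/Doctor Who (1963) - s01e01 - An Unearthly Child.mkv
In short: movies get a folder per film with the year in parentheses, while shows get a folder per series with season subfolders and sXXeYY episode numbering.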
Jellyfin follows the same naming rules, but I find their media scanner to be a lot less accurate and forgiving than Plex's. Once you've selected the folders to be scanned, the service will scan your files, tagging everything and adding metadata. Although I do find Plex more accurate, it can still erroneously tag some things and you might have to manually clean up some tags in a large library. (When I initially created my library it tagged the 1963-1989 Doctor Who as some Korean soap opera, and I needed to manually select the correct match, after which everything was tagged normally.) It can also be a bit testy with anime (especially OVAs), so be sure to check TVDB to ensure that you have your files and folders structured and named correctly. If something is not showing up at all, double check the name.
Once that's done, organizing and customizing your library is easy. You can set up collections, grouping items together to fit a theme or collect together all the entries in a franchise. You can make playlists, and add custom artwork to entries. It's fun setting up collections with posters to match, there are even several websites dedicated to help you do this like PosterDB. As an example, below are two collections in my library, one collecting all the entries in a franchise, the other follows a theme.
My Star Trek collection, featuring all eleven television series, and thirteen films.
My Best of the Worst collection, featuring sixty-nine films previously showcased on RedLetterMedia’s Best of the Worst. They’re all absolutely terrible and I love them.
As for settings, ensure you've got Remote Access going (it should work automatically), and be sure to set your upload speed after running a speed test. In the library settings, set the database cache to 2000MB to ensure a snappier and more responsive browsing experience, and then check that playback quality is set to original/maximum. If you're severely bandwidth limited on your upload and have remote users, you might want to limit the remote stream bitrate to something more reasonable; as a point of comparison, Netflix's 1080p bitrate is approximately 5Mbps, although almost anyone watching through a Chromium based browser is streaming at 720p and 3Mbps. Other than that you should be good to go. For actually playing your files, there's a Plex app for just about every platform imaginable. I mostly watch television and films on my laptop using the Windows Plex app, but I also use the Android app, which can broadcast to the Chromecast connected to the TV in the office, and the Android TV app for our smart TV. Both are fully functional and easy to navigate, and I can also attest to the OS X version being equally functional.
Part Eight: Finding Media
Now, this is not really a piracy tutorial, there are plenty of those out there. But if you’re unaware, BitTorrent is free and pretty easy to use, just pick a client (qBittorrent is the best) and go find some public trackers to peruse. Just know now that all the best trackers are private and invite only, and that they can be exceptionally difficult to get into. I’m already on a few, and even then, some of the best ones are wholly out of my reach.
If you decide to take the left hand path and turn to Usenet you’ll have to pay. First you’ll need to sign up with a provider like Newshosting or EasyNews for access to Usenet itself, and then to actually find anything you’re going to need to sign up with an indexer like NZBGeek or NZBFinder. There are dozens of indexers, and many people cross post between them, but for more obscure media it’s worth checking multiple. You’ll also need a binary downloader like SABnzbd. That caveat aside, Usenet is faster, bigger, older, less traceable than BitTorrent, and altogether slicker. I honestly prefer it, and I'm kicking myself for taking this long to start using it because I was scared off by the price. I’ve found so many things on Usenet that I had sought in vain elsewhere for years, like a 2010 Italian film about a massacre perpetrated by the SS that played the festival circuit but never received a home media release; some absolute hero uploaded a rip of a festival screener DVD to Usenet. Anyway, figure out the rest of this shit on your own and remember to use protection, get yourself behind a VPN, use a SOCKS5 proxy with your BitTorrent client, etc.
On the legal side of things, if you’re around my age, you (or your family) probably have a big pile of DVDs and Blu-Rays sitting around unwatched and half forgotten. Why not do a bit of amateur media preservation, rip them and upload them to your server for easier access? (Your tools for this are going to be Handbrake to do the ripping and AnyDVD to break any encryption.) I went to the trouble of ripping all my SCTV DVDs (five box sets worth) because none of it is on streaming nor could it be found on any pirate source I tried. I’m glad I did, forty years on it’s still one of the funniest shows to ever be on TV.
Part Nine/Epilogue: Sonarr/Radarr/Lidarr and Overseerr
There are a lot of ways to automate your server for better functionality or to add features you and other users might find useful. Sonarr, Radarr, and Lidarr are part of a suite of "Servarr" services (there's also Readarr for books and Whisparr for adult content) that allow you to automate the collection of new episodes of TV shows (Sonarr), new movie releases (Radarr) and music releases (Lidarr). They hook into your BitTorrent client or Usenet binary downloader and crawl your preferred torrent trackers and Usenet indexers, alerting you to new releases and automatically grabbing them. You can also use these services to manually search for new media, and even replace/upgrade your existing media with better quality uploads. They're a little tricky to set up on a bare metal Ubuntu install (ideally you should be running them in Docker Containers), and I won't be providing a step by step on installing and running them; I'm simply making you aware of their existence.
The other bit of kit I want to make you aware of is Overseerr which is a program that scans your Plex media library and will serve recommendations based on what you like. It also allows you and your users to request specific media. It can even be integrated with Sonarr/Radarr/Lidarr so that fulfilling those requests is fully automated.
And you're done. It really wasn't all that hard. Enjoy your media. Enjoy the control you have over that media. And be safe in the knowledge that no hedgefund CEO motherfucker who hates the movies but who is somehow in control of a major studio will be able to disappear anything in your library as a tax write-off.
Installing Linux (Mint) as a Non-Techy Person
I've wanted Linux for various reasons since college. I tried it once when I no longer had to worry about having specific programs for school, but it did not go well. It was a dedicated PC that was, I believe, poorly made. Anyway.
In the process of deGoogling and deWindows365'ing, I started to think about Linux again. Here is my experience.
Pre-Work: Take Stock
List out the programs you use regularly and those you need. Look up whether or not they work on Linux. For those that don't, look up alternatives.
If the alternative works on Windows/Mac, try it out first.
Make sure you have your files backed up somewhere.
Also, pick up a 5GB minimum USB drive.
Oh and make a system restore point (look it up in your Start menu) and back up your files.
Step One: Choose a Distro
Dear god do Linux people like to talk about distros. Basically, from what all I've read, if you don't want to fuss a lot with your OS, you've got two options: Ubuntu and Linux Mint. Ubuntu is better known and run by a company called Canonical. Linux Mint is run by a small team and paid for via donations.
I chose Linux Mint. Some of the stuff I read about Ubuntu reminded me too much of my reasons for wanting to leave Windows, basically. Did I second-guess this a half-dozen times? Yes, yes I did.
The rest of this is true for Linux Mint Cinnamon only.
Step Two: Make your Flash Drive
Linux Mint has great instructions. For the most part they work.
Start here:
The trickiest part of creating the flash drive is verifying and authenticating it.
On the same page that you download the Linux .iso file there are two links. Right click+save as both of those files to your computer. I saved them and the .iso file all to my Downloads folder.
Then, once you get to the 'Verify your ISO image' page in their guide and you're on Windows like me, skip down to this link about verifying on Windows.
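The gist of the Windows method is a single command in Command Prompt. A sketch, assuming the ISO is in your Downloads folder and that you substitute the exact filename of the version you downloaded:
certutil -hashfile C:\Users\YOURNAME\Downloads\linuxmint.iso SHA256
Compare the hash it prints against the contents of the sha256sum.txt file you saved alongside the ISO; they should match exactly.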
Once it is verified, you can go back to the Linux Mint guide. They'll direct you to download Etcher and use that to create your flash drive.
If this step is too tricky, then please reconsider Linux. Subsequent steps are both easier and trickier.
Step Three: Restart from your Flash Drive
This is the step where I nearly gave up. The guide is still great, except it doesn't mention certain security features that make installing Linux Mint impossible without extra steps.
(1) Look up your Bitlocker recovery key and have it handy.
I don't know if you'll need it like I did (I did not turn off Bitlocker at first), but better to be safe.
(2) Turn off Bitlocker.
(3) Restart. When on the title screen, press your Bios key. There might be more than one. On a Lenovo, pressing F1 several times gets you to the relevant menu. This is not the menu you'll need to install, though. Turn off "Secure Boot."
(4) Restart. This time press F12 (on a Lenovo). The HDD option, iirc, is your USB. Look it up on your phone to be sure.
Now you can return to the Linux Mint instructions.
Figuring this out via trial-and-error was not fun.
Step Four: Install Mint
Just follow the prompts. I chose to do the dual boot.
You will have to click through some scary messages about irrevocable changes. This is your last chance to change your mind.
I chose the dual boot because I may not have anticipated everything I'll need from Windows. My goal is to work primarily in Linux. Then, in a few months, if it is working, I'll look up the steps for making my machine Linux only.
Some Notes on Linux Mint
Some of the minor things I looked up ahead of time and other miscellany:
(1) HP Printers supposedly play nice with Linux. I have not tested this yet.
(2) Linux Mint can easily access your Windows files. I've read that this does not go both ways. I've not tested it yet.
(3) You can move the taskbar (panel in LM) to the left side of your screen.
(4) You are going to have to download your key programs again.
(5) The LM software manager has most programs, but not all. Some you'll have to download from websites. Follow instructions. If a file leads to a scary wall of strange text, close it and just do the Terminal instructions instead.
(6) The software manager also has fonts. I was able to get Fanwood (my favorite serif) and JetBrains (my favorite mono) easily.
In the end, be prepared for something to go wrong. Just trust that you are not the first person to ever experience the issue and look it up. If that doesn't help, you can always ask. The forums and reddit community both look active.
the op of that "you should restart your computer every few days" post blocked me so i'm going to perform the full hater move of writing my own post to explain why he's wrong
why should you listen to me: took operating system design and a "how to go from transistors to a pipelined CPU" class in college, i have several servers (one physical, four virtual) that i maintain, i use nixos which is the linux distribution for people who are even bigger fucking nerds about computers than the typical linux user. i also ran this past the other people i know that are similarly tech competent and they also agreed OP is wrong (haven't run this post by them but nothing i say here is controversial).
anyway the tl;dr here is:
you don't need to shut down or restart your computer unless something is wrong or you need to install updates
i think this misconception that restarting is necessary comes from the fact that restarting often fixes problems, and so people think that the problems are because of the not restarting. this is, generally, not true. in most cases there's some specific program (or part of the operating system) that's gotten into a bad state, and restarting that one program would fix it. but restarting is easier since you don't have to identify specifically what's gone wrong. the most common problem i can think of that wouldn't fall under this category is your graphics card drivers fucking up; that's not something you can easily reinitialize without restarting the entire OS.
this isn't saying that restarting is a bad step; if you don't want to bother trying to figure out the problem, it's not a bad first go. personally, if something goes wrong i like to try to solve it without a restart, but i also know way, way more about computers than most people.
as more evidence to point to this, i would point out that servers are typically not restarted unless there's a specific need. this is not because they run special operating systems or have special parts; people can and do run servers using commodity consumer hardware, and while linux is much more common in the server world, it doesn't have any special features to make it more capable of long operation. my server with the longest uptime is 9 months, and i'd have one with even more uptime than that if i hadn't fucked it up so bad two months ago i had to restore from a full disk backup. the laptop i'm typing this on has about a month of uptime (including time spent in sleep mode). i've had servers with uptimes measuring in years.
there's also a lot of people that think that the parts being at an elevated temperature just from running is harmful. this is also, in general, not true. i'd be worried about running it at 100% full blast CPU/GPU for months on end, but nobody reading this post is doing that.
the other reason i see a lot is energy use. the typical energy use of a computer not doing anything is like... 20-30 watts. this is about two or three lightbulbs' worth. that's not nothing, but it's not a lot to be concerned over. in terms of monetary cost, that's maybe $10 on your power bill over a few months. if it's in sleep mode it's even less, and if it's in full-blown hibernation mode it's literally zero.
there are also people in the replies to that post giving reasons. all of them are false.
temporary files generally don't use enough disk space to be worth worrying about
programs that leak memory return it all to the OS when they're closed, so it's enough to just close the program itself. and the OS generally doesn't leak memory.
'clearing your RAM' is not a thing you need to do. neither is resetting your registry values.
your computer can absolutely use disk space from deleted files without a restart. i've taken a server that was almost completely full, deleted a bunch of unnecessary files, and it continued fine without a restart.
1K notes
·
View notes
Text
in wake of yet another wave of people being turned off by windows, here's a guide on how to dual boot windows and 🐧 linux 🐧 (useful for when you're not sure if you wanna make the switch and just wanna experiment with the OS for a bit!)
if you look up followup guides online you're gonna see that people are telling you to use ubuntu but i am gonna show you how to do this using kubuntu instead because fuck GNOME. all my homies hate GNOME.

i'm just kidding, use whatever distro you like. my favorite's kubuntu (for a beginner home environment). read up on the others if you're curious. and don't let some rando on reddit tell you that you need pop! OS for gaming. gaming on linux is possible without it.
why kubuntu?
- it's very user friendly
- it comes with applications people might already be familiar with (VLC player and firefox for example)
- libreoffice already preinstalled
- no GNOME (sorry GNOME enthusiasts, let me old man yell at the clouds) (also i'm playing this up for the laughs. wholesome kde/gnome meme at the bottom of this post.)
for people who are interested in this beyond my tl;dr: read this
(if you're a linux user, don't expect any tech wizardry here. i know there's a billion other and arguably better ways to do x y and/or z. what i'm trying to do here is to keep these instructions previous windows user friendly. point and click. no CLI bro, it'll scare the less tech savvy hoes. no vim supremacy talk (although hell yeah vim supremacy). if they like the OS they'll figure out bash all by themselves in no time.)
first of all, there'll be a GUI. you don't need to type lines of code to get this all running. we're not going for the ✨hackerman aesthetics✨ today. grab a mouse and a keyboard and you're good to go.
what you need is a computer/laptop/etc with enough disk space to install both windows and linux on it. i'd recommend reserving at least 100gb for the both of them. in the process you'll learn how to re-allocate disk space either way, how to give and take some: we'll do a bit of disk partitioning to fit them both on a single disk.
and that's enough babbling for now, let's get to the actual tutorial:
🚨IMPORTANT. DO NOT ATTEMPT THIS ON A 32BIT SYSTEM. ONLY DO THIS IF YOU'RE WORKING WITH A 64BIT SYSTEM. 🚨 (win10 and win11: settings -> system -> about -> device specifications -> system type ) it should say 64bit operating system, x64-based processor.
step 1: install windows on your computer FIRST. my favorite way of doing this is by creating an installation media with rufus. you can either grab and prepare two usb sticks for each OS, or you can prepare them one after the other. (pro tip: get two usb sticks, that way you can label them and store them away in case you need to reinstall windows/linux or want to install it somewhere else)
in order to do this, you need to download three things:
rufus
win10 (listen. i know switching to win11 is difficult. not much of a fan of it either. but support's gonna end for good. you will run into hiccups. it'll be frustrating for everyone involved. hate to say it, but in this case i'd opt for installing its dreadful successor over there ->) or win11
kubuntu (the download at the top is always the latest, most up-to-date one)
when grabbing your windows installation of choice pick this option here, not the media creation tool option at the top of the page:
side note: there's also very legit key sellers out there who can hook you up with cheap keys. you're allowed to do that if you use those keys privately. don't do this in an enterprise environment though. and don't waste money on it if your ultimate goal is to switch to linux entirely at one point.
from here it's very easy sailing. plug your usb drive into your computer and fire up rufus (just double click it).
🚨two very important things though!!!!!!:🚨
triple check your usb device. whatever one you selected will get wiped entirely in order to make space for your installation media. if you want to be on the safe side only plug in the ONE usb stick you want to use. and back up any music, pictures or whatever else you had on there before or it'll be gone forever.
you can only install ONE OS on ONE usb drive. so you need to do this twice, once with your kubuntu iso and once with your windows iso, on a different drive each.
done. now you can dispense windows and linux left and right, whenever and wherever you feel like it. you could, for example, start with your designated dual boot device. installing windows is now as simple as plugging the usb device into your computer and booting it up. from there, click your way through the installation process and come back to this tutorial when you're ready.
step 2: preparing the disks for a dual boot setup
on your fresh install, find your disk partitions. in your search bar enter either "diskmgmt.msc" and hit enter or just type "partitions". the former opens your disk manager right away, the latter serves you up with this "create and format hard disk partitions" search result and that's what you're gonna be clicking.
you'll end up on a screen that looks more or less like in the screenshot below. depending on how many disks you've installed this might look different, but the basic gist is the same. we're going to snip a little bit off Disk 0 and make space for kubuntu on it. my screenshot isn't the best example because i'm using the whole disk and in order to practice what i preach i'd have to go against my own advice. that piece of advice is: if this screen intimidates you and you're not sure what you're doing here, hands off your (C:) drive, EFI system, and recovery partition. however, if you're feeling particularly fearless, go check out the amount of "free space" to the right. is there more than 30gb left available? if so, you're free to right click your (C:) drive and click "shrink volume"
this screen will pop up:
the minimum disk space required for kubuntu is 25gb. the recommended one is 50gb. for an installation like this, about 30gb are enough. in order to do that, simply change the value at
Enter the amount of space to shrink in MB: to 30000
and hit Shrink.
once that's done your partitions will have changed and unallocated space at about the size of 30gb should be visible under Disk 0 at the bottom like in the bottom left of this screenshot (courtesy of microsoft.com):
this is gonna be kubuntu's new home on your disk.
step 3: boot order, BIOS/UEFI changes
all you need to do now is plug the kubuntu-usb drive you prepared earlier with rufus into your computer again and reboot that bad boy.
the next step has no screenshots. we're heading into your UEFI/BIOS (by hitting a specific key (like ESC, F10, Enter) while your computer boots up) and that'll look different for everyone reading this. if this section has you completely lost, google how to do these steps for your machine.
a good search term would be: "[YOUR DEVICE (i.e Lenovo, your mainboard's name, etc.)] change boot order"
what you need to do is to tell your computer to boot your USB before it tries to boot up windows. otherwise you won't be able to install kubuntu.
this can be done by entering your BIOS/UEFI and navigating to a point called something along the lines of "boot". from "boot order" to "booting devices" to "startup configuration", it could be called anything.
what'll be a common point though is that it'll list all your bootable devices. the topmost one is usually the one that boots up first, so if your usb is anywhere below that, make sure to drag and drop or otherwise move it to the top.
when you're done navigate to Save & Exit. your computer will then boot up kubuntu's install wizard. you'll be greeted with this:
shocker, i know, but click "Install Kubuntu" on the right.
step 4: kubuntu installation
this is a guided installation. just like when you're installing windows you'll be prompted when you need to make changes. if i remember correctly it's going to ask you for your preferred keyboard layout, a network connection, additional software you might want to install, and all of that is up to you.
but once you reach the point where it asks you where you want to install kubuntu we'll have to make a couple of important choices.
🚨 another important note 🚨
do NOT pick any of the top three options. they will overwrite your already existing windows installation.
click manual instead. we're going to point it to our unallocated disk space. hit continue. you will be shown another disk partition screen.
what you're looking for are your 30gb of free space. just like with the USB drive when we were working with rufus, make sure you're picking the right one. triple check at the very least. the chosen disk will get wiped.
click it until the screen "create a new partition" pops up.
change the following settings to:
New partition size in megabytes: 512
Use as: EFI System Partition
hit OK.
click your free space again. same procedure.
change the following settings to:
New partition size in megabytes: 8000 (*this might be different in your case, read on.)
Use As: Swap Area
hit OK
click your free space a third time. we need one more partition.
change the following settings to:
don't change anything about the partition size this time. we're letting it use up the rest of the resources.
Use as: Ext4 journaling file system
Mount Point: /
you're done here as well.
*about the 8000 megabytes in the second step: this should match your RAM size. if you have 4gb of ram instead, type 4000, and so on.
once you're sure your configuration is good and ready to go, hit "Install Now". up until here you can go back and make changes to your settings. once you've clicked the button, there's no going back.
finally, select your timezone and create a user account. then hit continue. the installation should finish up... and you'll be good to go.
you'll be told to remove the USB drive from your computer and reboot your machine.
now when your computer boots up, you should end up on a black screen with a little bit of text in the top left corner. ubuntu and windows boot manager should be mentioned there. naturally, when you click ubuntu you will boot into your kubuntu. likewise if you hit windows boot manager your windows login screen will come up.
and that's that folks. go ham on messing around with your linux distro. customize it to your liking. make yourself familiar with the shell (on kubuntu, when you're on your desktop, hit CTRL+ALT+T).
for starters, you could feed it the first commands i always punch into fresh Linux installs:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get install vim
(you'll thank me for the vim one later)
turn your back on windows. taste freedom. nothing sexier than open source, baby.
sources (mainly for the pictures): 1, 2
further reading for the curious: 1, 2
linux basics (includes CLI commands)
kubuntu documentation (this is your new best friend. it'll tell you everything about kubuntu that you need to know.)
and finally the promised kde/gnome meme:
#windows#linuxposting#had a long day at work and i had to type this twice and i'm struggling to keep my eyes open#not guaranteeing that i didn't skip a step or something in there#so if someone linux savvy spots them feel free to point them out so i can make fixes to this post accordingly#opensource posting
122 notes
·
View notes
Text
Quick Tumblr Backup Guide (Linux)
Go to www.tumblr.com/oauth/apps and click the "Register Application" button
Fill in the form. I used the following values for the required fields:
Application Name - tumblr-arch
Application Website - https://github.com/Cebtenzzre/tumblr-utils
Application Description - tumblr archival instance based on tumblr-utils
Administrative contact email - < my personal email >
Default callback URL - https://github.com/Cebtenzzre/tumblr-utils
OAuth2 redirect URLs - https://github.com/Cebtenzzre/tumblr-utils
Get the OAuth Consumer Key for your application. It should be listed right on the www.tumblr.com/oauth/apps page.
Do python things:
# check python version:
python --version
# I've got Python 3.9.9
# create a venv:
python -m venv --prompt tumblr-bkp --upgrade-deps venv
# activate the venv:
source venv/bin/activate
# install dependencies:
pip install tumblr-backup
pip install tumblr-backup[video]
pip install tumblr-backup[jq]
pip install tumblr-backup[bs4]
# Check dependencies are all installed:
pip freeze
# set the api key:
tumblr-backup --set-api-key <OAuth Consumer Key>
So far I have backed up two blogs using the following:
tumblr-backup --save-audio --save-video --tag-index --save-notes --incremental -j --no-post-clobber --media-list <blog name>
There have been two issues I had to deal with so far:
one of the blogs was getting a "Non-OK API response: HTTP 401 Unauthorized". It further stated that "This is a dashboard-only blog, so you probably don't have the right cookies. Try --cookiefile." I resolved the issue by a) setting the "Hide from people without an account" toggle to off and b) enabling a custom theme. I think only step a) was actually necessary though.
"Newly registered consumers are rate limited to 1,000 requests per hour, and 5,000 requests per day. If your application requires more requests for either of these periods, please use the 'Request rate limit removal' link on an app above." Depending on how big your blog is, you may need to break up the download. I suspect using the "-n COUNT" or "--count COUNT" to save only COUNT posts at a time, combined with the "--incremental" will allow you to space things out. You would have to perform multiple passes though. I will have to play with that, so I'll report back my findings.
82 notes
·
View notes
Text
Creating a personal fanfic archive using Calibre, various Calibre plugins, Firefox Reader View, and an e-Reader / BookFusion / Calibre-Web
A few years ago I started getting serious about saving my favorite fic (or just any fic I enjoyed), since the Internet is sadly not actually always forever when it comes to fanfiction. Plus, I wanted a way to access fanfic offline when wifi wasn't available. Enter a personal fanfic archive!
There are lots of ways you can do this, but I thought I'd share my particular workflow in case it helps others get started. Often it's easier to build off someone else's workflow than to create your own!
Please note that this is for building an archive for private use -- always remember that it's bad form to publicly archive someone else's work without their explicit permission.
This is going to be long, so let's add a read more!
How to Build Your Own Personal Fanfic Archive
Step One: Install Calibre
Calibre is an incredibly powerful ebook management software that allows you to do a whole lot of stuff having to do with ebooks, such as convert almost any text-based file into an ebook and (often) vice-versa. It also allows you to easily side-load ebooks onto your personal e-reader of choice and manage the collection of ebooks on the device.
And because it's open source, developers have created a bunch of incredibly useful plugins to use with Calibre (including several we're going to talk about in the next step), which make saving and reading fanfiction super easy and fun.
But before we can do that, you need to download and install it. It's available for Windows, MacOS, Linux, and in a portable version.
Step Two: Download These Plugins
This guide would be about 100 pages long if I went into all of the plugins I love and use with Calibre, so we're just going to focus on the ones I use for saving and reading fanfiction. And since I'm trying to keep this from becoming a novel (lolsob), I'll just link to the documentation for most of these plugins, but if you run into trouble using them, just tag me in the notes or a comment and I'll be happy to write up some steps for using them.
Anyway, now that you've downloaded and installed Calibre, it's time to get some plugins! To do that, go to Preferences > Get plugins to enhance Calibre.
You'll see a pop-up with a table of a huge number of plugins. You can use the Filter by name: field in the upper right to search for the plugins below, one at a time.
Click on each plugin, then click Install. You'll be asked which toolbars to add the plugins to; for these, I keep the suggested locations (in the main toolbar & when a device is connected).
FanFicFare (here's also a great tutorial for using this plugin)
EpubMerge (for creating anthologies from fic series)
EpubSplit (for if you ever need to break up fic anthologies)
Generate Cover (for creating simple artwork for downloaded fic)
Manage Series (for managing fic series)
You'll have to restart Calibre for the plugins to run, so I usually wait to restart until I've installed the last plugin I want.
Take some time here to configure these plugins, especially FanFicFare. In the next step, I'll demonstrate a few of its features, but you might be confused if you haven't set it up yet! (Again, highly recommend that linked tutorial!)
Step Three: Get to Know FanFicFare (and to a lesser extent, Generate Cover)
FanFicFare is a free Calibre plugin that allows you to download fic in bulk, including all stories in a series as one work, adding them directly to Calibre so that that you can convert them to other formats or transfer them to your e-reader.
As with Calibre, FanFicFare has a lot of really cool features, but we're just going to focus on a few, since the docs above will show you most of them.
The features I use most often are: Download from URLs, Get Story URLs from Email, and Get Story URLs from Web Page.
Download from URLs lets you add a running list of URLs that you'd like FanFicFare to download and turn into ebooks for you. So, say, you have a bunch of fic from fanfic.net that you want to download. You can do that!
Now, in this case, I've already downloaded these (which FanFicFare detected), so I didn't update my library with the fic.
But I do have some updates to do from email, so let's try getting story URLs from email!
Woohoo, new fic! Calibre will detect when cover art is included in the downloaded file and use that, but at least one of these fic doesn't have cover art (which is the case for most of the fic I download). This is where Generate Cover comes in.
With Generate Cover, I can set the art, font, dimensions, and info content of the covers so that when I'm looking at the fic on my Kindle, I know right away what fic it is, what fandom it's from, and whether or not it's part of a series.
Okay, last thing from FanFicFare -- say I want to download all of the fic on a page, like in an author's profile on fanfic.net or all of the stories in a series. I can do that too with Get Story URLs from Web Page:
The thing I want to call out here is that I can specify whether the fic at this link are individual works or all part of an anthology, meaning if they're all works in the same series, I can download all stories as a single ebook by choosing For Anthology Epub.
Step Four: Using FireFox Reader View to Download Fic Outside of Archives
This is less common now thanks to AO3, but the elders among us may want to save fanfic that exists outside of archives on personal websites that either still exist or that exist only on the Internet Wayback Machine. FanFicFare is awesome and powerful, but it's not able to download fic from these kinds of sources, so we have to get creative.
I've done this in a couple of ways, none of which are entirely perfect, but the easiest way I've found thus far is by using Firefox's Reader View. Also, I don't think I discovered this -- I think I read about this on Tumblr, actually, although I can no longer find the source (if you know it, please tell me so I can credit them!).
At any rate, open the fic in Firefox and then toggle on Reader View:
Toggling on Reader View strips all the HTML formatting from the page and presents the fic in the clean way you see in the preview below, which is more ideal for ebook formats.
To save this, go to the hamburger menu in the upper right of the browser and select Print, then switch to Print to PDF. You'll see the URL and some other stuff at the top and bottom of the pages; to remove that, scroll down until you see something like More settings... and uncheck Print headers and footers.
Click Save to download the resulting PDF, which you can then add to Calibre and convert to whichever format works best for your e-reader or archive method.
Step Five: Archiving (Choose Your Own Adventure)
Here's the really fun part: now that you know how to download your fave fanfics in bulk and hopefully have a nice little cache going, it's time to choose how you want to (privately) archive them!
I'm going to go through each option I've used in order of how easy it is to implement (and whether it costs additional money to use). I won't go too in depth about any of them, but I'm happy to do so in a separate post if anyone is interested.
Option 1: On Your Computer
If you're using Calibre to convert fanfic, then you're basically using your computer as your primary archive. This is a great option, because it carries no additional costs outside the original cost of acquiring your computer. It's also the simplest option, as it really doesn't require any advanced technical knowledge, just a willingness to tinker with Calibre and its plugins or to read how-to docs.
Calibre comes with a built-in e-book viewer that you can use to read the saved fic on your computer (just double-click on the fic in Calibre). You can also import it into your ebook app of choice (in most cases; this can get a little complicated just depending on how many fic you're working with and what OS you're on/app you're using).
If you choose this option, you may want to consider backing the fic up to a secondary location like an external hard drive or cloud storage. This may incur additional expense, but is likely still one of the more affordable options, since storage space is cheap and only getting cheaper, and text files tend to not be that big to begin with, even when there are a lot of them.
Option 2: On Your e-Reader
This is another great option, since this is what Calibre was built for! There are some really great, afforable e-readers out there nowadays, and Calibre supports most of them. Of course, this is a more expensive option because you have to acquire an e-reader in addition to a computer to run Calibre on, but if you already have an e-reader and haven't considered using it to read fanfic, boy are you in for a treat!
Option 3: In BookFusion
This is a really cool option that I discovered while tinkering with Calibre and used for about a year before I moved to a self-hosted option (see Option 4).
BookFusion is a web platform and an app (available on iOS and Android) that allows you to build your own ebook library and access it from anywhere, even when you're offline (it's the offline bit that really sold me). It has a Calibre plugin through which you can manage your ebook library very easily, including sorting your fanfic into easy-to-access bookshelves. Depending on your subscription, you may also be able to share ebooks, though only with family members.
Here's what the iOS app looks like:
The downside to BookFusion is that you'll need a subscription if you want to upload more than 10 ebooks. It's affordable(ish), ranging from $1.99 per month for a decent 5GB storage all the way to $9.99 for 100GB for power users. Yearly subs range from $18.99 to $95.99. (They say this is temporary, early bird pricing, but subscribing now locks you into this pricing forever.)
I would recommend this option if you have some cash to spare and you're really comfortable using Calibre or you're a nerd for making apps like BookFusion work. It works really well and is incredibly convenient once you get it set up (especially when you want to read on your phone or tablet offline), but even I, someone who works in tech support for a living, had some trouble with the initial sync and ended up duplicating every ebook in my BookFusion library, making for a very tedious cleanup session.
Option 4: On a Self-Hosted Server Using Calibre-Web
Do you enjoy unending confusion and frustration? Are you okay with throwing fistfuls of money down a well? Do you like putting in an incredible amount of work for something only you and maybe a few other people will ever actually use? If so, self-hosting Calibre-Web on your own personal server might be a good fit for you!
To be fair, this is likely an experience unique to me, because I am just technical enough to be a danger to myself. I can give a brief summary of how I did this, but I don't know nearly enough to explain to you how to do it.
Calibre-Web is a web app that works on top of Calibre, offering "a clean and intuitive interface for browsing, reading, and downloading eBooks."
I have a network-attached storage (NAS) server on which I run an instance of Calibre and Calibre-Web (through the miracle that is Docker). After the initial work of downloading all the fic I wanted to save and transferring it to the server, I'm now able to download all new fic pretty much via email thanks to FanFicFare, so updating my fic archive is mostly automated at this point.
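For the curious, here's roughly what that looks like as a single Docker command. I'm going from memory on the linuxserver.io Calibre-Web image, so double-check the image name and mounts against their current docs; the paths are placeholders for wherever your config and Calibre library actually live:

docker run -d --name=calibre-web \
  -p 8083:8083 \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -v /path/to/config:/config \
  -v /path/to/calibre/library:/books \
  lscr.io/linuxserver/calibre-web:latest

Once it's running, Calibre-Web should be waiting at http://your-server:8083.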
If you're curious, this is what it looks like:
Pros: The interface is clean and intuitive, the ebook reader is fantastic. The Discover feature, in which you are given random books / fic to read, has turned out to be one feature worth all the irritation of setting up Calibre-Web. I can access, read, and download ebooks on any device, and I can even convert ebooks into another format using this interface. As I mentioned above, updating it with fic (and keeping the Docker container itself up to date) is relatively automated and easy now.
Cons: The server, in whichever form you choose, costs money. It is not cheap. If you're not extremely careful (and sometimes even if you are, like me) and a hard drive goes bad, you could lose data (and then you have to spend more money to replace said hard drive and time replacing said data). It is not easy to set up. You may, at various points in this journey, wish you could launch the server into the sun, Calibre-Web into the sun, or yourself into the sun.
Step Six: Profit!
That's it! I hope this was enough to get you moving towards archiving your favorite fanfic. Again, if there's anything here you'd like me to expand on, let me know! Obviously I'm a huge nerd about this stuff, and love talking about it.
#genie's stuff#calibre#calibre-web#bookfusion#personal fanfic archive#archiving fanfic#saving fanfic
103 notes
·
View notes
Text
Getting Undertale running on linux in 2024: a guide for those that cannot be assed on debian-based distributions
Step one: TRY.
this is for the humblebundle downloads only, unfortunately. i don't have + can't test the steam version. unzip. get into the folder. try good ol ./UNDERTALE on the runnable-looking thing. if that doesn't work, try chmod +x UNDERTALE and chmod +x game/runner for good measure and repeat.
if you managed that and it runs, congratulations!! YOU WON. otherwise:
Step two: SCREAM.
You probably got cryptic messages about stuff not being found when you caN SEE THEM RIGHT THERE. it's okay. it's an old game on an old engine. it's 32-bit. the messages don't help with diagnosing that but if it's THAT sort of message IT'S THE 32-BIT BULLSHIT. continue to step three.
Step three: 32-bit libs the easy part.
sudo apt install lib32z1. try to run it again. restart if no change. now you're probably met with something MUCH more helpful and specific, like:
don't give up you're getting closer!!
Step four: 32-bit libs the bug-squashing part.
this one is annoying but you only have to do it once per machine i love you
okay, first, setup your machine for the 32-libs with
sudo dpkg --add-architecture i386
sudo apt update
^ you won't need to repeat this ever again. you're good. NOW. hunt down where the files you want are in your package directories, depending on the distribution. I was missing libstdc++.so.6. it was in the package libstdc++6. notice the pattern: it's the lowercase library name plus the number that comes after 'so'. if you're on debian or ubuntu you'll probably only have to plug this pattern in and you're GOOD.
sudo apt install libstdc++6:i386
^ the colon part is important! that's the 32 bit bit.
wait for the install to finish, try to run the executable again and hunt down the next library. rinse and repeat until undertale kicks in and RUNS. That's it!!! you're done, hopefully!!
the libraries I was personally missing were: libXxf86vm.so.1, libGL.so.1, libopenal.so.1, libXrandr.so.2 and libGLU.so.1. I installed them from the packages libxxf86vm1:i386, libgl1:i386, libopenal1:i386, libxrandr2:i386 and libglu1:i386. all were conforming to the pattern earlier.
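bonus trick so you don't have to guess package names: apt-file can look up which package ships a given library. it's one extra install (i'm assuming debian/ubuntu here, same as the rest of this post):

sudo apt install apt-file
sudo apt-file update
apt-file search libGL.so.1

then install whatever it spits out, with the :i386 bit tacked on.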
step five: undered. tal <3
okay now how to tag this when in two years I want to play undertale again on a new machine.
#undertale#tech#uhhh#linux#debian#ubuntu#im mostly just savinv this on a searchable blog for next time I want to explode over thishfakjhsfl
24 notes
·
View notes
Note
whats the status of like. using linux on a phone. it feels like there are two parallel universes, one that kde lives in where people use linux on phones, and one where if you google linux phones you discover theyre almost usable but they can barely make phone calls or send texts and they only run on like 4 models of phone
don't have much experience with linux on phone so anyone please correct me if i'm wrong but
one of the problems with phones is that every vendor and manufacturer adds their own proprietary driver blob to it and these have to be extracted and integrated into the kernel in order for the hardware to function.
as companies don't like to share their magic of "how does plastic slab make light", reverse engineering all your hardware is quite a difficult task. Sometimes there just isn't a driver for the camera of a phone model yet because no one was able to make it work.
So naturally, this takes a lot of time and tech is evolving fast so by the time a phone is completely compatible, next generations are already out and your new model obsolete.
Also important to note: most of this work is made by volunteers, people with a love for programming who put a lot of their own time into these things, most of them after their daytime jobs as a hobby.
Of course, there are companies and associations out there who build linux phones for a living. But the consumer hardware providers, like Pinephone, Fairphone and others out there aren't as big and don't have this much of a lobby behind them so they can't get their prices cheap. Also the manufacturers are actively working against our right to repair so we need more activism.
To make the phones still affordable (and because of said driver issues above) they have to use older hardware, sometimes even used phones from other manufacturers that they have to fix up, so you can't really expect a modern experience. At least you can revive some older phones. As with everything Linux.
Then there's the software providers who many of are non-profits. KDE has Plasma Mobile, Canonical works on Ubuntu Touch, Debian has the Mobian Project and among some others there's also the Arch Linux ARM Project.
That's right baby, ARM. We're not talking about your fancy PC or ThinkPad with their sometimes even up to 64-bit processors. No no no, this is the future, fucking chrome jellyfishes and everything.
This is the stuff Apple just started building their fancy line of over-priced and over-engineered Fisher-Price laptop-desktops on and Microsoft started (Windows 10X), discontinued and beat into the smush of ChatGPT Nano Bing Open AI chips in all your new surface hp dell asus laptops.
What I was trying to say is that program support even for the market-dominating monopolies out there is still limited and.... (from my own experience at the workplace) buggy. Which, in these times of enshittification, is bad news. And the good projects you gotta emulate afterwards anyways so yay extra steps!
Speaking of extra steps: In order to turn their phone into a true freedom phone, users need to free themselves of their phone's warranty, shed the shackles of locked-down root access, install a custom recovery onto their phone (like TWRP for example), and also have more technical know-how than the typical user, which doesn't quite sound commercial-ready to me.
So is there no hope at all?
Fret not, my friend!
If we can't put the Linux into the phone, why don't we put the phone around the Linux? You know... Like a container?
Thanks to EU regulations-
(US consumers, please buy the European versions of your phones! They are sometimes a bit more expensive, but used models of the same generation or one below usually still have warranty, are around the same price as over there in Freedom Valley, and (another side tangent incoming - because of better European consumer protection laws) sometimes have other advantages, such as faster charging and data transfer (USB-C vs lightning ports) or less bloated systems)
- it is made easier now to virtualize Linux on your phone.
You can download a terminal emulator, create a headless Linux VM and get a VNC client running. This comes with a performance limit though, as an app with standard user permissions is containerized inside of Android itself, so it can't use the whole hardware.
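For the curious, a rough sketch of that route using Termux and its proot-distro tool. Fair warning: proot-distro is technically a container rather than a true VM, and I'm assuming a current Termux build from F-Droid here:

pkg install proot-distro
proot-distro install debian
proot-distro login debian

From inside that login you can apt install a desktop environment and a VNC server, then point an Android VNC client at localhost.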
If you have root access on your phone, you can assign more RAM and CPU to your VM.
Also things like SDL just released a new version so emulation is getting better.
And didn't you hear the news? You can run other things inside a VM on an iPhone now! Yup, and I got Debian with Xfce running on my Xiaomi phone. Didn't do much with it tho. Also Windows XP and playing Sims 1 on mobile. Was fun, but battery draining. Maybe something more for tablets for now.
Things will get interesting now that Google officially is a monopoly. It funds a lot of that stuff.
I really want a Steam Deck.
Steam phones would be cool.
#asks#linux#linuxposting#kde plasma#kde#:3#kde desktop environment#arch linux#windows#microsoft#mobile phones#linux mobile#ubuntu#debian#arch#steam#gabe newell#my lord and savior
17 notes
·
View notes
Note
After I deleted a bunch of projects (thankfully non-critical, though representing a great deal of work in total) during a recent fresh OS install, I realized that my backup practices are practically non-existent. Any tips or sources on getting started making, and eventually automating, effective backups?
I am stealing the concept here from jwz's backup guide, but I am recommending different tools, focusing on personal files only, and also addressing Windows. jwz's guide is a good reference:
Doing a way, way better job than most people of backing up one single system is very easy. Let us begin.
The most basic step of having decent backups is getting your hands on two external hard drives at least big enough to hold your entire system, and putting a label on them that says "BACKUP ONLY DO NOT USE FOR ANYTHING ELSE I AM BEING FOR REAL HERE"
Once you've got those, plug one into your system wherever it spends the most time. If you have a desktop then that's solved, if it's a laptop hopefully you already have a USB hub you plug it into when you sit down to work or whatever and you can just leave it there.
Now set up regular scheduled backups to that device. On Windows and Mac, there's a built in tool for backing up your system to an external drive. We'll assume that you just want to back up your user files on Windows and Linux, since doing full system backups isn't tricky but is kind of unnecessary.
(Ugh. Windows seems to be trying to phase out Windows Backup and Restore in favour of their File History thing. That's annoying, let me log in to windows and check how this actually works. Mac in the meantime)
Mac has Time Machine. Time Machine is extremely good, and you can tell Time Machine to save its backups to a disk. Point Time Machine at your external hard drive and tell it to schedule a backup however frequently you want. If anything goes wrong in the future, you can ask Time Machine to look at that backup disk and it'll show you a few versions of whatever you backed up there. I'm not a Mac user but I think you can even use Time Machine to transfer between an old computer and a new one.
Windows now has File History which I have never used in my life, they added it after I stopped using Windows. Same idea though, pick some folders and back them up to an external storage device. If anything goes wrong, use File History to go back through that device and find the version of the file you wanted. I don't know if there's still a way to access the older Backup and Restore system.
On Linux, my favoured way to manage simple desktop backups is Deja Dup, a GUI for Duplicity. Duplicity can do a lot more than just backup to a disk, but we'll start there. Install Deja Dup, open it up, and follow the prompts to back up your user files to the external drive. Deja Dup can also do backups to remote storage servers, Google Drive/Onedrive, and commercial storage providers like Amazon and Backblaze. It will even encrypt your backups if you are worried about Amazon spying on your files or whatever. If something goes wrong, point Deja Dup at your backup drive and it will offer you a suite of restore options covering a few versions.
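If you like seeing what the GUI does under the hood, Deja Dup's backups boil down to duplicity commands. A minimal sketch, assuming your backup drive mounts at /media/you/BACKUP:

duplicity ~/Documents file:///media/you/BACKUP/documents

Restoring is the same idea in reverse: duplicity restore file:///media/you/BACKUP/documents ~/Documents. The GUI is still the sane default; this is just for the curious.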
Now, you have a permanently plugged in hard drive that will always get rolling backups you can restore from. These aren't safe from, say, ransomware, or your house burning down, but at least you won't lose anything when you update a computer or accidentally delete something and have an ohshit moment.
Now you take that other drive you bought, and do the same backup you're already doing to that. Now you go put it somewhere else where it's readily accessible and won't be accidentally used for anything, keep it at the office, give it to your dad, whatever. Set a reminder on your phone for once a month. Once a month, go get that drive, run another backup, and put it back. You now have better backups than many medium sized businesses.
This is impractical to scale beyond one PC, but if we're being honest even when I had like half a dozen laptops, only one contained much of value. Back up the system you care about.
Don't worry too much about making sure your backups are space efficient, like, yeah it would be a good idea to exclude game installs and stuff from your backups to save space but if that sounds daunting or time consuming literally do not do it. Decision paralysis is brain poison, just back it up and sort it out later. 2TB external hard drives are cheap.
FURTHER STEPS YOU CAN TAKE:
Easy Cloud backup: Backblaze personal backup on Windows and Mac is $6/month and pretty easy to use. If you are struggling to keep track of a monthly remote backup, or you want an easy remote backup. Backblaze is a reasonably reliable company and one of the Go To Companies in the world of data reliability. Yes, it's a cloud subscription. If you don't want that don't use it.
Network backup: If you have access to a storage server, that can be a good way to make a remote backup without having to shuttle disks around. That could be a physical server if you maintain some kind of lab, or it could be a cloud storage provider like Backblaze B2 or Onedrive or whatever. Deja Dup specifically supports backing up to a lot of different network storage providers, and even if you only have a fifty or sixty gigabytes of network storage on hand, your most essential personal files can probably fit in there.
Drive failures: Eventually one of your drives will fail, either your storage drive or your backup drive. If the storage drive fails, well, that's what the backup is for, go get a replacement and restore from the backup. If your backup drive fails, well, that's why you have two of them. As soon as humanly possible go get a replacement drive, and substitute it in for the dead one.
101 notes
·
View notes
Text
Ok this is a bit of a wordy post but bear with me. I've been reading up on the tech literacy discourse and I thought I'd add my two cents, and how it connects to piracy. LONG post under the cut!
I was born in the year 2000, which puts me on the border of being a digital native. I was brought up on tech, but only in my later childhood and teens. I've always considered myself "tech literate," but no more than the usual kid my age.
The first time I ever truly experienced tech illiteracy with my peers was when I was 23, when in one of my college classes a MacOS update rendered the software we used for said class unusable. After a few days a temporary patch was released, by which point an assignment that utilized the software was due the next day. I followed the patch instructions, which involved navigating to the software files and substituting a designated file with the provided patch. A bit more complicated than a simple update, but the instructions were clear and intuitive enough to easily understand where the file went. The next day, during a class study session, I overheard multiple people come up to the professor complaining that the software wasn't working. After the second person complained and the professor remained clueless, I asked the student what MacOS version they were on. Sure enough they were on the latest version, which as we already know is incompatible with the software. I then walked the student through the patching process step-by-step, with them needing to essentially be hand-held through the entire process (almost to the point of me doing everything for them). After the patch was implemented, the student thanked me and said "Wow! How did you figure all of this out?" and to me that question was stupid- I just googled "[software] [version] MacOS [version] fix", went to the first result (which was the company website), downloaded the patch zip file, and followed the instructions on the README.txt file. It was so easy, and I couldn't comprehend that this was somehow complicated for other people, especially those my age. I mean we literally grew up using computers. It wasn't until I started learning about tech literacy and learned helplessness that I finally started connecting the dots.
Tech in general is becoming extremely user friendly, almost to a fault. UI and UX simplicity is taking away any critical thinking needed to use any sort of tech. My peers are so used to one-click and/or automatic updates, so the fact that this required slightly more effort than a simple update triggered their learned helplessness. The professor was no help in this case either, since he just extended the due date for those affected with no penalty. I actually ended up making a very detailed (and I mean idiot proof detailed) step by step picture guide with screenshots on how to install the patch for the software for the class. Anyways, back to the main point- How can I blame my peers for not knowing how to install a "complicated" update when they're so used to being spoon-fed simplicity?
But hang on- how was I the exception? I'm just as used to tech simplicity as anyone else, it's not like I'm using anything differently or making things harder for myself on purpose (I'm looking at you, linux users). So why was I the only one who knew how to install this update? It wasn't until I had a discussion many months later with my mom about this tech illiteracy epidemic that I finally thought it through. I acquired problem solving skills through piracy. To start off: not piracy but adjacent- learning to install mods in Minecraft when I was 11 taught me file navigation and what a README.txt file was, as well as the importance of version specificity/compatibility. Figuring out how to play Pokemon roms on the family computer and my iPod touch when I was 12? That's piracy, and it also taught me how to work with different platforms and the art of jailbreaking. Installing custom firmware on my 3ds so I could pirate games when I was 16 taught me how to follow written tech instructions without any visual guidance. Pirating Adobe software on my MacBook in high school taught me about patching files on MacOS. All of this knowledge and inherent googling that came with it made installing the patch for my class software look like a tiny drop in the bucket in terms of complexity.
So why am I saying all of this? Am I suggesting people learn to pirate to become tech literate?
yes.
With everything becoming pay-walled, subscription services running rampant, the proliferation of closed-source "ecosystems" *cough* Apple *cough*, and (arguably) most importantly media preservation, piracy is a skill that will serve you well in the long term. It will teach you critical thinking in the tech sphere, and if enough people learn then we can solve this ever growing epidemic of tech illiteracy. I'm not really sure how to end this post, so if anyone has anything else they'd like to add please feel free to.
Thank you for coming to my ted talk.
43 notes
·
View notes
Text
Q4OS – I setup for myself Linux with Trinity

I set up a Linux for myself, for a not-so-powerful system: my Acer Extensa, which has two cores at 1.5 GHz and 4 GB of RAM. In the end I selected Q4OS, a light Linux for weaker systems, with its own graphical environment named Trinity. Trinity is easier and lighter, a first-party development from the system's authors, made specifically so the system does not require lots of resources. And that is very good for me.

From the author's website you download the installation image and write it to a flash drive as boot media. One little moment, check carefully: there are Live CD images, which run the system straight from the device (a flash drive or compact disc), and there are images for setup, for install. So for an install, you need to download that version. I was not careful at first and downloaded the Live CD, and I could not find an option to install. But you can run the system at once! I was surprised. I looked at what it was, and started to understand what it is all about.


The installation process is simple and easy. Nothing tricky, nothing hard. This is good; the installation is friendly to the user. We move through the steps of the installer, and after that the system finally launches, already installed. It is good to check for updates and see what is there. Everything is checked automatically by itself. In Linux these come as packages, lists of packages; you just need to start the program made for that purpose. This means a network is required: without internet you cannot do updates.

And later, on to the package manager. You start it and see what it can show you. It has lots of different things. I am not an expert with Linux; I take this as a trial run. I installed a whole preset pack of packages for myself, a little game: Chocolate Doom with an add-on pack. So it is not only Chocolate Doom; some extra files are included, so everything comes bundled. It includes FreeDoom, and this is comfortable! You can start to play at once. The levels in FreeDoom are unique, even the graphics are its own. But mainly, it is the same Doom.

Once again, I am not an expert with Linux; better to say, I am trying this on my own. But it is a funny thing: the installer looks like MS-DOS, with that kind of install line. And the system itself reminds me of Windows, at the level of Windows XP or maybe even Windows 98. That kind of background, similar colors, and those window shapes.

An interesting moment! Visually, I like this Trinity desktop scheme a lot. Function-wise it is very good; I am very surprised. There are lots of things here, and the main one is the package manager, used for updating and installing. This is comfortable. For a start, it is a good way in.

The visual side looks like a Windows 98 plus. I like this visual side a lot, and the functions are good. So this was my first launch: I played a little Doom and visited websites with the browser. The first launch was successful! And positive!

Iron (hardware) and programs: from time to time I restore computers, retro computers, try retro software, check out some programs, and write about all of these.
Dima Link makes retro videogames, apps, a little music, writes stories, and some retro more.
WEBSITE: http://www.dimalink.tv-games.ru/home_eng.html ITCHIO: https://dimalink.itch.io/
#os#retro computer#q4os#linux#try linux#light linux#boot cd#live cd#install os#windows 98#windows xp#chocolate doom#free doom#ms dos#simple install#trinity#soft#operating system#old computer#something new#simple linux#first launch#welcome#packets#manager#not powerful pc#pc#computer expiriments#new soft#penguin
3 notes
·
View notes
Text
07.05.25
I tried out two Linux distributions on my test laptop today.
Before I started the task, I updated Linux Mint Cinnamon with the update manager to receive the latest updates for the system and installed apps.
I can also use the terminal to update everything by typing the APT command 'sudo apt-get update'.
'sudo' runs the command with root (admin) privileges, 'apt-get' is the tool that fetches packages, and 'update' refreshes the list of available updates. I then type 'sudo apt-get upgrade' and press enter to upgrade the system and all installed applications.
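Put together, and assuming I want both steps in one go, that is:

sudo apt-get update && sudo apt-get upgrade

The '&&' simply means the upgrade only runs if the update succeeds.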
System up to date!
Firstly, I downloaded Debian from: https://www.debian.org/distrib/
I chose the Debian Live MATE desktop environment.
I then went to download Linux Mint from: https://linuxmint.com/download.php
I chose the Mint MATE Edition desktop environment.
Debian MATE was a 3.1 gigabyte download and Linux Mint MATE was 2.9 gigabytes.
Once they had downloaded, I located them in Downloads and opened the built in USB writer application.
I wrote the ISO files to the sticks to create two bootable USB sticks.
After this step I booted Debian 12 MATE.
I selected 'try' on the boot menu screen.
Here it is in action! I played around with the user interface and tested the sound, which worked brilliantly!
Next, I booted Linux Mint 22.1 MATE.
Again, I selected 'try' to boot up the live environment.
Here it is in action! Again I tested the sound, played a YouTube video in Firefox and customised the panels and themes.
Both MATE desktops in both distros were very interesting and seemed even snappier when compared to the operating systems I have installed on this laptop, which are Linux Mint 22.1 Cinnamon and Ubuntu 24.04!
I found Debian with the MATE desktop to be the most stable environment however.
See blog below to learn more about the modern take on the classic GNOME 2 experience!
4 notes
·
View notes
Text
Ha. So I've set up this brand new laptop which i got without an OS and therefore "had" to put ubuntu on it.
I am now, two stressless (!!) hours later, done with it: I installed Steam (gonna try to play my game here now, let's see what this baby can actually give out in terms of power) and got rid of some extra software i won't need.
That's two hours in which i
formatted a usb stick
downloaded Rufus and the latest ubuntu
made a bootable usb stick
installed Linux
tried and failed to log into my wifi (damn stupid secure password) about six times before getting it right
downloaded Steam
updated the system components
It was easy, but people say I am pretty good at these things. Still. You'll find tutorials for every step. Installing Linux is WAY EASIER than setting up Windows!!
if you feel able to change some Windows settings, I promise you will be able to set up a linux computer. Please try it. Trust me, it's worth it.
3 notes
·
View notes
Text
Installing Kali Linux on a USB Stick: A Step-by-Step Guide
If you want a portable, powerful cybersecurity toolkit you can carry in your pocket, installing Kali Linux on a USB stick is the perfect solution. With Kali on a USB, you can boot into your personalized hacking environment on almost any computer without leaving a trace — making it a favorite setup for ethical hackers, penetration testers, and cybersecurity enthusiasts.

In this guide, we'll walk you through how to install Kali Linux onto a USB drive — step-by-step — so you can have a portable Kali environment ready wherever you go.
Why Install Kali Linux on a USB?
Before we dive into the steps, here’s why you might want a Kali USB:
Portability: Carry your entire hacking setup with you.
Privacy: No need to install anything on the host machine.
Persistence: Save your settings, files, and tools even after rebooting.
Flexibility: Boot into Kali on any system that allows USB boot.
There are two main ways to use Kali on a USB:
Live USB: Runs Kali temporarily without saving changes after reboot.
Persistent USB: Saves your files and system changes across reboots.
In this article, we’ll focus on setting up a Live USB, and I'll also mention how to add persistence if you want. And if you're looking for more Kali Linux knowledge, you can visit our website any time.
Website: Linux Tools Guide
What You’ll Need
✅ A USB drive (at least 8GB; 16GB or more recommended if you want persistence).
✅ Kali Linux ISO file (download it from the official Kali website).
✅ Rufus (for Windows) or Etcher/balenaEtcher (for Mac/Linux/Windows).
✅ A computer that can boot from USB.
Step 1: Download the Kali Linux ISO
Go to the Kali Linux Downloads page and grab the latest version of the ISO. You can choose between the full version or a lightweight version depending on your USB size and system requirements.
Tip: Always verify the checksum of the ISO to ensure it hasn't been tampered with!
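For example, on Linux you can hash the downloaded file and compare the output against the value published next to the download link. The filename below is just a placeholder for whichever ISO you grabbed (on macOS, use shasum -a 256 instead):

sha256sum kali-linux-2025.1a-live-amd64.iso

If the result doesn't match the published checksum, re-download the ISO before going any further.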
Step 2: Insert Your USB Drive
Plug your USB stick into your computer. ⚠️ Warning: Installing Kali onto the USB will erase all existing data on it. Backup anything important first!
Step 3: Create a Bootable Kali Linux USB
Depending on your operating system, the tool you use may vary:
For Windows Users (using Rufus):
Download and open Rufus (Get Rufus here).
Select your USB drive under Device.
Under Boot selection, choose the Kali Linux ISO you downloaded.
Keep the Partition scheme as MBR (for BIOS) or GPT (for UEFI) based on your system.
Click Start and wait for the process to complete.
For Mac/Linux Users (using balenaEtcher):
Download and open balenaEtcher (Get Etcher here).
Select the Kali ISO.
Select the USB drive.
Click Flash and wait until it's done.
That's it! You now have a Live Kali USB ready.
Step 4: Boot Kali Linux from the USB
Restart your computer with the USB plugged in.
Enter the BIOS/UEFI settings (usually by pressing a key like F12, Esc, Del, or F2 right after starting the computer).
Change the boot order to boot from the USB first.
Save changes and reboot.
You should now see the Kali Linux boot menu! Select "Live (amd64)" to start Kali without installation.
(Optional) Step 5: Adding Persistence
Persistence allows you to save files, system changes, or even installed tools across reboots — super useful for real-world usage.
Setting up persistence requires creating an extra partition on the USB and tweaking a few settings. Here's a quick overview:
Create a second partition labeled persistence.
Format it as ext4.
Mount it and create a file /persistence.conf inside it containing the single line: / union
When booting Kali, choose the "Live USB Persistence" option.
Persistence is a little more technical but absolutely worth it if you want a real working Kali USB system!
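Here's a rough command-line sketch of those steps, assuming your stick shows up as /dev/sdb and the new partition lands at /dev/sdb3. Check with lsblk first and triple-check the device name, because formatting the wrong disk is unrecoverable:

sudo fdisk /dev/sdb
# inside fdisk: create a new partition in the free space after the Kali image
sudo mkfs.ext4 -L persistence /dev/sdb3
sudo mkdir -p /mnt/my_usb
sudo mount /dev/sdb3 /mnt/my_usb
echo "/ union" | sudo tee /mnt/my_usb/persistence.conf
sudo umount /mnt/my_usb

Reboot, pick the "Live USB Persistence" entry, and your changes should now survive restarts.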
Troubleshooting Common Issues
USB not showing up in boot menu?
Make sure Secure Boot is disabled in BIOS.
Ensure the USB was properly written (try writing it again if necessary).
Kali not booting properly?
Verify the ISO file integrity.
Try a different USB port (a USB 2.0 port sometimes works where a 3.0 one doesn't).
Persistence not working?
Double-check the /persistence.conf file and make sure it's correctly placed.
Conclusion
Installing Kali Linux onto a USB stick is one of the smartest ways to carry a secure, full-featured hacking lab with you anywhere. Whether you’re practicing ethical hacking, doing security audits, or just exploring the world of cybersecurity, a Kali USB drive gives you power, portability, and flexibility all at once.
Once you’re set up, the possibilities are endless — happy hacking! 🔥
2 notes
·
View notes